
    Hyperprofile-based Computation Offloading for Mobile Edge Networks

    In recent studies, researchers have developed various computation offloading frameworks that bring cloud services closer to the user via edge networks. Specifically, an edge device needs to offload computationally intensive tasks because of energy and processing constraints, which presents the challenge of identifying which edge nodes should receive tasks so as to reduce overall resource consumption. We propose a solution to this problem that incorporates elements of Knowledge-Defined Networking (KDN) to make intelligent predictions about offloading costs based on historical data. Each server instance is represented in a multidimensional feature space in which each dimension corresponds to a predicted metric. We compute these features to form a "hyperprofile" and position nodes in it according to the predicted costs of offloading a particular task. We then perform a k-Nearest Neighbor (kNN) query within the hyperprofile to select nodes for offloading computation. This paper formalizes our hyperprofile-based solution and explores the viability of using machine learning (ML) techniques to predict metrics useful for computation offloading. We also investigate the effect of different distance metrics on the queries. Our results show that various network metrics can be modeled accurately with regression, and that there are circumstances under which kNN queries using Euclidean distance are preferable to those using rectilinear distance. Comment: 5 pages, NSF REU Site publication
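
    The node-selection step lends itself to a short sketch. Below is a minimal illustration of a hyperprofile kNN query, assuming each node's hyperprofile is simply a vector of predicted per-metric offloading costs and that the query point is the ideal (e.g., zero-cost) point; the function and array names are illustrative, not from the paper.

```python
import numpy as np

def select_offload_nodes(hyperprofiles, query_point, k=3, metric="euclidean"):
    """Return the indices of the k nodes nearest the query point.

    hyperprofiles: (n_nodes, n_metrics) array; row i holds node i's
                   predicted offloading costs (one dimension per metric).
    query_point:   (n_metrics,) point to search from, e.g. the ideal
                   zero-cost point (an assumption made for this sketch).
    metric:        'euclidean' (L2) or 'rectilinear' (L1, Manhattan).
    """
    diff = hyperprofiles - query_point
    if metric == "euclidean":
        dists = np.sqrt((diff ** 2).sum(axis=1))
    else:
        dists = np.abs(diff).sum(axis=1)
    return np.argsort(dists)[:k]

# Five candidate nodes, three predicted metrics (e.g. latency, energy, load).
nodes = np.array([[10., 2., 0.3], [4., 5., 0.9], [6., 1., 0.2],
                  [12., 8., 0.7], [5., 3., 0.4]])
print(select_offload_nodes(nodes, np.zeros(3), k=2, metric="rectilinear"))
```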

    Investigating the Use of Recurrent Neural Networks in Modeling Guitar Distortion Effects

    Guitar players have been modifying their guitar tone with audio effects ever since the mid-20th century. Traditionally, these effects have been achieved by passing the guitar signal through a series of electronic circuits that modify it to produce the desired effect. With advances in computer technology, audio “plugins” have been created to produce audio effects digitally, through software algorithms. More recently, machine learning researchers have been exploring the use of neural networks to replicate audio effects originally created by analog and digital effects units. Recurrent Neural Networks have proven exceptional at modeling audio effects such as overdrive, distortion, and compression. This research aims to analyze the inner workings of these neural networks and how they replicate audio effects with such fidelity. A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) will also be used to model a distortion effect unit, and its output will be compared with that of the original device.
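
    As a point of reference for how such models are typically structured, here is a minimal PyTorch sketch of a sample-to-sample LSTM effect model. The architecture and hyperparameters are illustrative assumptions, not the network used in this work, and the training target below is a synthetic stand-in for recordings of the actual pedal.

```python
import torch
import torch.nn as nn

class DistortionLSTM(nn.Module):
    """Sample-to-sample model: clean guitar samples in, distorted samples out."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # hidden state -> one output sample

    def forward(self, x):        # x: (batch, time, 1) clean signal
        out, _ = self.lstm(x)
        return self.head(out)    # (batch, time, 1) predicted distorted signal

model = DistortionLSTM()
clean = torch.randn(8, 2048, 1)       # stand-in for clean-guitar training frames
distorted = torch.tanh(5.0 * clean)   # synthetic target; real data would come
                                      # from recording the modeled device
loss = nn.functional.mse_loss(model(clean), distorted)
loss.backward()                       # gradients for one training step
```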

    Using Neural Networks to Model Guitar Distortion

    Guitar players have been modifying their guitar tone with audio effects ever since the mid-20th century. Traditionally, these effects have been achieved by passing the guitar signal through a series of electronic circuits that modify it to produce the desired effect. With advances in computer technology, audio “plugins” have been created to produce audio effects digitally, through software algorithms. More recently, machine learning researchers have been exploring the use of neural networks to produce audio effects that yield strikingly similar results to their analog counterparts. Recurrent Neural Networks and Temporal Convolutional Networks have proven exceptional at modeling audio effects such as overdrive, distortion, and compression. The goal of this research is to analyze the inner workings of these neural networks and how they replicate audio effects with such fidelity. Some of these networks will also be used to model a distortion effect, and their outputs will be compared with that of the original device.
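
    The Temporal Convolutional Networks mentioned here replace recurrence with stacks of dilated causal convolutions, so each output sample depends only on a fixed window of past input. A minimal sketch under that assumption follows; layer counts and sizes are illustrative, not this project's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """Conv1d that left-pads its input so outputs never depend on future samples."""
    def forward(self, x):
        pad = self.dilation[0] * (self.kernel_size[0] - 1)
        return super().forward(F.pad(x, (pad, 0)))

class TinyTCN(nn.Module):
    """Stack of dilated causal convolutions; the receptive field doubles per layer."""
    def __init__(self, channels=16, layers=4, kernel_size=3):
        super().__init__()
        self.convs = nn.ModuleList(
            CausalConv1d(1 if i == 0 else channels, channels,
                         kernel_size, dilation=2 ** i)
            for i in range(layers))
        self.head = nn.Conv1d(channels, 1, 1)   # back to one audio channel

    def forward(self, x):                       # x: (batch, 1, time)
        for conv in self.convs:
            x = torch.tanh(conv(x))
        return self.head(x)                     # same length in and out

model = TinyTCN()
out = model(torch.randn(2, 1, 4096))            # torch.Size([2, 1, 4096])
```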

    'Follow the Data' — What Data Says About Real-world Behavior in Commons Problems

    We test the game-theoretic foundations of common-pool resources using an individual-level dataset of groundwater usage that accounts for 3% of US irrigated agriculture. Using necessary and sufficient revealed preference tests for dynamic games, we find (i) a rejection of the standard game-theoretic arguments based on strategic substitutes, and instead (ii) support for models built on reciprocity-like behavior and strategic complements. By estimating strategic interactions directly, we find that reciprocity-like interactions drive behavior more than market and climate trends do. Taken together, these results are a step toward more realistic models of groundwater usage and of related issues in the tragedy of the commons and commons governance.
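
    For readers outside game theory, the substitutes-versus-complements distinction the abstract turns on can be stated with the standard textbook condition on payoff cross-partials; this is the generic definition, not the authors' estimating equation.

```latex
% Standard definition (not the authors' specific model):
% u_i is player i's payoff, x_i their extraction, x_j a rival's extraction.
\[
\frac{\partial^2 u_i}{\partial x_i\,\partial x_j} < 0
\qquad \text{strategic substitutes: rivals pumping more lowers $i$'s marginal return}
\]
\[
\frac{\partial^2 u_i}{\partial x_i\,\partial x_j} > 0
\qquad \text{strategic complements: rivals pumping more raises it (reciprocity-like)}
\]
```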

    Properly Learning Decision Trees with Queries Is NP-Hard

    We prove that it is NP-hard to properly PAC learn decision trees with queries, resolving a longstanding open problem in learning theory (Bshouty 1993; Guijarro-Lavin-Raghavan 1999; Mehta-Raghavan 2002; Feldman 2016). While there has been a long line of work, dating back to (Pitt-Valiant 1988), establishing the hardness of properly learning decision trees from random examples, the more challenging setting of query learners necessitates different techniques, and no lower bounds were previously known. En route to our main result, we simplify and strengthen the best known lower bounds for a different problem, Decision Tree Minimization (Zantema-Bodlaender 2000; Sieling 2003). On a technical level, we introduce the notion of hardness distillation, which we study for decision tree complexity but which can be considered for any complexity measure: for a function that requires large decision trees, we give a general method for identifying a small set of inputs that is responsible for its complexity. Our technique even rules out query learners that are allowed constant error. This contrasts with existing lower bounds for the setting of random examples, which hold only for inverse-polynomial error. Our result, taken together with a recent almost-polynomial-time query algorithm for properly learning decision trees under the uniform distribution (Blanc-Lange-Qiao-Tan 2022), demonstrates the dramatic impact of distributional assumptions on the problem. Comment: 41 pages, 10 figures, FOCS 2023

    Bostonia

    Founded in 1900, Bostonia magazine is Boston University's main alumni publication, which covers alumni and student life, as well as university activities, events, and programs.

    A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers

    We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers. The $\varepsilon$-approximate junta complexity of a function $f$ is the smallest integer $r$ such that $f$ is $\varepsilon$-close to a function that depends on only $r$ variables. A strong composition theorem states that if $f$ has large $\varepsilon$-approximate junta complexity, then $g \circ f$ has even larger $\varepsilon'$-approximate junta complexity, even for $\varepsilon' \gg \varepsilon$. We develop a fairly complete understanding of this behavior, proving that the junta complexity of $g \circ f$ is characterized by that of $f$ along with the multivariate noise sensitivity of $g$. For the important case of symmetric functions $g$, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity. We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics. Comment: 44 pages, 1 figure, FOCS 2023
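
    In symbols, the quantity the abstract defines can be written as follows; $J_\varepsilon(f)$ is shorthand introduced here for readability, and the paper's own notation may differ.

```latex
% \varepsilon-approximate junta complexity (shorthand J_\varepsilon, introduced here):
\[
J_\varepsilon(f) \;=\; \min\bigl\{\, r \;:\; \exists\, g \text{ depending on at most } r
\text{ variables such that } \Pr_{x}\bigl[f(x) \ne g(x)\bigr] \le \varepsilon \,\bigr\}
\]
% A strong composition theorem lower-bounds J_{\varepsilon'}(g \circ f) in terms of
% J_\varepsilon(f), even when \varepsilon' \gg \varepsilon.
```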

    A Query-Optimal Algorithm for Finding Counterfactuals

    We design an algorithm for finding counterfactuals with strong theoretical guarantees on its performance. For any monotone model $f : X^d \to \{0,1\}$ and instance $x^\star$, our algorithm makes $S(f)^{O(\Delta_f(x^\star))} \cdot \log d$ queries to $f$ and returns an optimal counterfactual for $x^\star$: a nearest instance $x'$ to $x^\star$ for which $f(x') \ne f(x^\star)$. Here $S(f)$ is the sensitivity of $f$, a discrete analogue of the Lipschitz constant, and $\Delta_f(x^\star)$ is the distance from $x^\star$ to its nearest counterfactuals. The previous best known query complexity was $d^{O(\Delta_f(x^\star))}$, achievable by brute-force local search. We further prove a lower bound of $S(f)^{\Omega(\Delta_f(x^\star))} + \Omega(\log d)$ on the query complexity of any algorithm, thereby showing that the guarantees of our algorithm are essentially optimal. Comment: 22 pages, ICML 2022
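
    The brute-force local search baseline the abstract cites can be made concrete: search Hamming balls of increasing radius around $x^\star$ until the label flips. A minimal sketch for binary features, assuming only query access to the model $f$ (this is the $d^{O(\Delta_f(x^\star))}$ baseline, not the paper's algorithm):

```python
from itertools import combinations

def brute_force_counterfactual(f, x_star):
    """Search Hamming balls of growing radius around x_star for the nearest
    input with the opposite label -- the d^O(Delta) baseline, written for
    binary features and black-box query access to f."""
    d = len(x_star)
    target = 1 - f(x_star)
    for radius in range(1, d + 1):                     # stops at Delta_f(x_star)
        for coords in combinations(range(d), radius):  # which bits to flip
            x = list(x_star)
            for i in coords:
                x[i] = 1 - x[i]
            if f(tuple(x)) == target:
                return tuple(x)                        # a nearest counterfactual
    return None                                        # f is constant

f = lambda x: int(sum(x) >= 2)                         # toy monotone model: majority
print(brute_force_counterfactual(f, (0, 0, 0)))        # -> (1, 1, 0)
```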

    Certification with an NP Oracle

    In the certification problem, the algorithm is given a function $f$ with certificate complexity $k$ and an input $x^\star$, and the goal is to find a certificate of size $\le \mathrm{poly}(k)$ for $f$'s value at $x^\star$. This problem is in $\mathsf{NP}^{\mathsf{NP}}$ and, assuming $\mathsf{P} \ne \mathsf{NP}$, is not in $\mathsf{P}$. Prior works, dating back to Valiant in 1984, have therefore sought to design efficient algorithms by imposing assumptions on $f$ such as monotonicity. Our first result is a $\mathsf{BPP}^{\mathsf{NP}}$ algorithm for the general problem. The key ingredient is a new notion of the balanced influence of variables, a natural variant of influence that corrects for the bias of the function. Balanced influences can be accurately estimated via uniform generation, and classic $\mathsf{BPP}^{\mathsf{NP}}$ algorithms are known for the latter task. We then consider certification with stricter instance-wise guarantees: for each $x^\star$, find a certificate whose size scales with that of the smallest certificate for $x^\star$. In sharp contrast with our first result, we show that this problem is $\mathsf{NP}^{\mathsf{NP}}$-hard even to approximate. We obtain an optimal inapproximability ratio, adding to a small handful of problems in the higher levels of the polynomial hierarchy for which optimal inapproximability is known. Our proof involves the novel use of bit-fixing dispersers for gap amplification. Comment: 25 pages, 2 figures, ITCS 2023
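
    To make the central object concrete: a set $S$ of coordinates certifies $f$'s value at $x^\star$ if every input agreeing with $x^\star$ on $S$ receives the same label. The brute-force checker below (exponential in $d - |S|$, nothing like the paper's $\mathsf{BPP}^{\mathsf{NP}}$ algorithm) is just to pin down the definition:

```python
from itertools import product

def is_certificate(f, x_star, S):
    """Check whether fixing x_star's bits on coordinate set S forces f's value:
    every completion of the remaining coordinates must agree with f(x_star).
    Brute force (2^(d-|S|) queries) -- illustrative only, feasible for small d."""
    d = len(x_star)
    free = [i for i in range(d) if i not in S]
    for bits in product([0, 1], repeat=len(free)):
        x = list(x_star)
        for i, b in zip(free, bits):
            x[i] = b
        if f(tuple(x)) != f(x_star):
            return False
    return True

# Toy example: f = x0 AND x1; fixing x0 = 0 alone forces f = 0.
f = lambda x: x[0] & x[1]
print(is_certificate(f, (0, 1, 0), {0}))   # True: x0 = 0 forces f = 0
print(is_certificate(f, (1, 1, 0), {0}))   # False: x1 also matters
```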